# Contrastive learning optimization
**Instruct-CLIP** · SherryXTChen · Apache-2.0 · Text-to-Image · English · Downloads: 74 · Likes: 2
Instruct-CLIP automatically optimizes training data through contrastive learning to enhance instruction-guided image editing.

**LatentDiffusionDINOv2** · SherryXTChen · Apache-2.0 · Image Generation · English · Downloads: 35 · Likes: 0
Part of the Instruct-CLIP project: a contrastive learning-based model designed to optimize instruction-guided image editing tasks.

**InstructCLIP-InstructPix2Pix** · SherryXTChen · Apache-2.0 · Text-to-Image · English · Downloads: 450 · Likes: 5
An instruction-guided image editing model improved through contrastive-learning-based automatic data optimization; it combines CLIP and Stable Diffusion to edit images according to textual instructions.

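Since the listing describes an InstructPix2Pix-style editor built on Stable Diffusion, a minimal usage sketch with diffusers would look like the following; treating the checkpoint as compatible with `StableDiffusionInstructPix2PixPipeline` is an assumption to verify against the model card.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

# Assumption: the checkpoint exposes the standard InstructPix2Pix pipeline
# layout; confirm on the model card before relying on this.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "SherryXTChen/InstructCLIP-InstructPix2Pix",
    torch_dtype=torch.float16,
).to("cuda")

image = load_image("input.png")  # any RGB image to edit
edited = pipe(
    "replace the background with a snowy mountain",  # textual edit instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # how closely to follow the input image
    guidance_scale=7.5,        # how closely to follow the instruction
).images[0]
edited.save("edited.png")
```
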
**GLuCoSE-base-ja-v2** · pkshatech · Apache-2.0 · Text Embedding · Japanese · Downloads: 25.25k · Likes: 20
A general-purpose Japanese text embedding model, optimized for retrieval tasks and efficient enough to run well on CPU-only hardware.

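Assuming the checkpoint loads as a standard sentence-transformers model (typical for GLuCoSE releases), a minimal embedding sketch follows; the model card may prescribe query/passage prefixes for retrieval, which are omitted here.

```python
from sentence_transformers import SentenceTransformer

# Sketch, assuming sentence-transformers compatibility; check the model card
# for retrieval-specific input prefixes before production use.
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")  # runs fine on CPU

sentences = ["今日は良い天気ですね。", "明日は雨が降りそうです。"]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(embeddings.shape)  # (2, hidden_dim)
```
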
**MMLW Retrieval E5 Large** · sdadas · Apache-2.0 · Text Embedding · Transformers · Other · Downloads: 56 · Likes: 3
MMLW is a neural text encoder for Polish, optimized for information retrieval; it converts queries and passages into 1024-dimensional vectors (a shared usage sketch follows the next entry).

**MMLW Retrieval E5 Small** · sdadas · Apache-2.0 · Text Embedding · Transformers · Other · Downloads: 34 · Likes: 1
MMLW (a Polish acronym for "muszę mieć lepszą wiadomość", roughly "I must have a better message") is a neural text encoder for Polish, optimized for information retrieval; it converts queries and passages into 384-dimensional vectors.

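A single retrieval sketch covers both MMLW checkpoints above; it assumes sentence-transformers loading and the usual E5 convention of role prefixes on inputs, both of which should be confirmed on the model cards.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch for either checkpoint; swap in sdadas/mmlw-retrieval-e5-small for
# 384-dim vectors. The "query:"/"passage:" prefixes follow the usual E5
# convention and are an assumption here.
model = SentenceTransformer("sdadas/mmlw-retrieval-e5-large")  # 1024-dim

queries = ["query: Jak zadbać o kondycję fizyczną?"]
passages = [
    "passage: Regularny trening i zbilansowana dieta poprawiają kondycję.",
    "passage: Stolica Polski to Warszawa.",
]
q = model.encode(queries, normalize_embeddings=True)
p = model.encode(passages, normalize_embeddings=True)
print(util.cos_sim(q, p))  # higher score = more relevant passage
```
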
**Compositional BERT Large Uncased** · perceptiveshawty · Apache-2.0 · Text Embedding · Transformers · English · Downloads: 754 · Likes: 2
CompCSE and SimCSE are contrastive-learning-based sentence embedding models for computing sentence similarity.

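A sketch of how a SimCSE-style encoder is typically queried for similarity; the [CLS] pooling and the exact repo id are assumptions drawn from common SimCSE practice and this listing, not from a verified model card.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch: SimCSE-style encoders usually take the [CLS] vector as the sentence
# embedding; both the pooling choice and the repo id are assumptions here.
name = "perceptiveshawty/compositional-bert-large-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sents = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tok(sents, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = model(**batch).last_hidden_state[:, 0]  # [CLS] pooling

out = F.normalize(out, dim=-1)
print((out[0] @ out[1]).item())  # cosine similarity of the two sentences
```
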
**COCO-DR Base MS MARCO** · OpenMatch · MIT · Text Embedding · Transformers · Downloads: 18.25k · Likes: 5
COCO-DR is a BERT-base dense retrieval model that combines contrastive learning with distributionally robust learning to address distribution shift in zero-shot dense retrieval.

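In zero-shot dense retrieval, the same encoder embeds the query and the candidate documents, which are then ranked by inner product. A sketch assuming [CLS] pooling, which should be checked against COCO-DR's documentation:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Sketch: encode query and documents with the same encoder and rank documents
# by dot product. [CLS] pooling is an assumption; confirm on the model card.
name = "OpenMatch/cocodr-base-msmarco"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def encode(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**batch).last_hidden_state[:, 0]  # [CLS] vectors

q = encode(["what causes distribution shift in retrieval?"])
d = encode([
    "Distribution shift arises when test queries differ from training data.",
    "The Eiffel Tower is located in Paris.",
])
print(q @ d.T)  # relevance scores; higher means a better match
```
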
**All Datasets V4 MiniLM L12** · flax-sentence-embeddings · Text Embedding · English · Downloads: 2,084 · Likes: 2
A sentence embedding model fine-tuned from MiniLM-L12 with self-supervised contrastive learning on over 1 billion sentence pairs; it produces high-quality semantic vector representations (a shared usage sketch follows the last entry in this family).

**All Datasets V3 MiniLM L6** · flax-sentence-embeddings · Text Embedding · English · Downloads: 46 · Likes: 0
A MiniLM-based sentence embedding model trained on over 1 billion sentence pairs through self-supervised contrastive learning; it generates high-quality sentence vector representations.

**All Datasets V3 RoBERTa Large** · flax-sentence-embeddings · Text Embedding · English · Downloads: 987 · Likes: 13
A RoBERTa-large sentence embedding model trained on 1 billion sentence pairs through self-supervised contrastive learning, producing semantically rich sentence vectors.

**All Datasets V3 MiniLM L12** · flax-sentence-embeddings · Text Embedding · English · Downloads: 887 · Likes: 1
A MiniLM-L12 sentence embedding model trained on over 1 billion sentence pairs through contrastive learning; it generates high-quality semantic vector representations.

**All Datasets V4 MPNet Base** · flax-sentence-embeddings · Text Embedding · English · Downloads: 131 · Likes: 6
An mpnet-base sentence embedding model trained on 1 billion sentence pairs through self-supervised contrastive learning; it generates high-quality semantic vector representations of sentences.

**All Datasets V4 MiniLM L6** · flax-sentence-embeddings · Text Embedding · English · Downloads: 6,550 · Likes: 34
A lightweight MiniLM-based sentence embedding model fine-tuned with contrastive learning on a 1-billion-pair dataset, suited to semantic similarity and information retrieval tasks.

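All six flax-sentence-embeddings checkpoints in this family expose the same sentence-transformers interface, so one sketch covers them; swap in any of the published repo ids (e.g. `all_datasets_v4_mpnet-base`).

```python
from sentence_transformers import SentenceTransformer, util

# One sketch for the whole family; any of the six repo ids above drops in here.
model = SentenceTransformer("flax-sentence-embeddings/all_datasets_v4_MiniLM-L6")

sentences = [
    "The cat sits on the mat.",
    "A feline is resting on a rug.",
    "Stock markets fell sharply today.",
]
emb = model.encode(sentences, normalize_embeddings=True)
print(util.cos_sim(emb, emb))  # pairwise cosine similarities
```
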